
    Thresholding in Learning Theory

    In this paper we investigate the problem of learning an unknown bounded function. We emphasize special cases where it is possible to provide very simple (in terms of computation) estimates that in addition enjoy the property of being universal: their construction does not depend on a priori knowledge of regularity conditions on the unknown object, and yet they have almost optimal properties for a whole range of function spaces. These estimates are constructed using a thresholding scheme, which over the last decade has proven in statistics to have very good properties for recovering signals with inhomogeneous smoothness, but which has not been extensively developed in learning theory. We basically consider two particular situations. In the first case, we consider the RKHS setting. Here we produce a new algorithm and investigate its performance in $L_2(\hat\rho_X)$. The exponential rates of convergence are proved to be almost optimal, and the regularity assumptions are expressed in simple terms. The second case considers a more specific situation where the $X_i$'s are one-dimensional and the estimator is a wavelet thresholding estimate. The results in this setting are comparable to those obtained in the RKHS situation as concerns the critical value and the exponential rates. The advantage here is that we are able to state the results in the $L_2(\rho_X)$ norm, and the regularity conditions are expressed in terms of standard Hölder spaces.
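    The thresholding scheme the abstract refers to can be illustrated, in a much simpler setting than the paper's, by classical wavelet soft thresholding with the universal threshold sigma * sqrt(2 log n). The following is only a minimal NumPy sketch: the Haar transform, the piecewise-constant test signal, and the noise level are illustrative choices, not the paper's construction.

```python
import numpy as np

def haar_forward(x):
    """Full orthonormal Haar decomposition of a length-2^J signal."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)             # approximation
    coeffs.append(a)
    return coeffs

def haar_inverse(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def soft(c, t):
    """Soft thresholding: shrink each coefficient toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
n, sigma = 256, 0.5
x = np.repeat([0.0, 4.0, -2.0, 1.0], n // 4)         # piecewise-constant signal
y = x + sigma * rng.standard_normal(n)               # noisy observations
t = sigma * np.sqrt(2 * np.log(n))                   # universal threshold
coeffs = haar_forward(y)
den = [soft(d, t) for d in coeffs[:-1]] + [coeffs[-1]]
xhat = haar_inverse(den)
err_noisy = np.mean((y - x) ** 2)
err_den = np.mean((xhat - x) ** 2)
```

    Because the signal is sparse in the Haar basis, almost all detail coefficients are pure noise and get zeroed, which is why the construction needs no prior knowledge of the signal's regularity.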

    Anisotropic Denoising in Functional Deconvolution Model with Dimension-free Convergence Rates

    In the present paper we consider the problem of estimating a periodic $(r+1)$-dimensional function $f$ based on observations from its noisy convolution. We construct a wavelet estimator of $f$, derive minimax lower bounds for the $L^2$-risk when $f$ belongs to a Besov ball of mixed smoothness, and demonstrate that the wavelet estimator is adaptive and asymptotically near-optimal within a logarithmic factor over a wide range of Besov balls. We prove in particular that choosing this type of mixed smoothness leads to rates of convergence that are free of the "curse of dimensionality" and, hence, are higher than the usual convergence rates when $r$ is large. The problem studied in the paper is motivated by seismic inversion, which can be reduced to the solution of noisy two-dimensional convolution equations that allow one to draw inference on underground layer structures along the chosen profiles. The common practice in seismology is to recover layer structures separately for each profile and then to combine the derived estimates into a two-dimensional function. By studying the two-dimensional version of the model, we demonstrate that this strategy usually leads to estimators which are less accurate than the ones obtained as two-dimensional functional deconvolutions. Indeed, we show that unless the function $f$ is very smooth in the direction of the profiles, very spatially inhomogeneous along the other direction, and the number of profiles is very limited, the functional deconvolution solution has a much better precision compared to a combination of $M$ solutions of separate convolution equations. A limited simulation study in the case $r=1$ confirms the theoretical claims of the paper. Comment: 29 pages, 1 figure, 1 table
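    The basic mechanism behind deconvolution from noisy data — invert the convolution in the Fourier domain while suppressing frequencies where the kernel spectrum is too small — can be sketched in one dimension. This is only a spectral-cutoff toy, not the paper's wavelet estimator; the signal, blur kernel, noise level, and cutoff are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n) / n
f = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)  # true periodic signal
g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 8.0) ** 2)
g /= g.sum()                                          # normalized blur kernel
G = np.fft.fft(np.fft.ifftshift(g))                   # kernel spectrum
y = np.real(np.fft.ifft(np.fft.fft(f) * G))           # noiseless convolution
y += 0.01 * rng.standard_normal(n)                    # noisy observations

Y = np.fft.fft(y)
keep = np.abs(G) > 0.05                               # spectral-cutoff regularization
Fhat = np.where(keep, Y / np.where(keep, G, 1.0), 0.0)
fhat = np.real(np.fft.ifft(Fhat))
err = np.mean((fhat - f) ** 2)
```

    Without the cutoff, dividing by the nearly vanishing high-frequency values of the kernel spectrum would amplify the noise arbitrarily; the paper replaces this crude frequency cutoff with adaptive wavelet thresholding.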

    Radon needlet thresholding

    We provide a new algorithm for the treatment of the noisy inversion of the Radon transform using an appropriate thresholding technique adapted to a well-chosen new localized basis. We establish minimax results and prove their optimality. In particular, we prove that the procedures provided here are able to attain minimax bounds for any $\mathbb{L}_p$ loss. It is important to notice that most of the minimax bounds obtained here are, to our knowledge, new. It is also important to emphasize the adaptation properties of our procedures with respect to the regularity (sparsity) of the object to recover and to inhomogeneous smoothness. We perform a numerical study that is of importance since we especially have to discuss the cubature problems, and we propose an averaging procedure that is mostly in the spirit of the cycle spinning performed for periodic signals.
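    The cycle-spinning idea mentioned at the end — average the estimates obtained after all circular shifts of the data to remove the shift-dependence of the transform — can be sketched for a 1-D periodic signal. A toy Haar denoiser stands in for the paper's Radon needlet procedure; the signal, noise level, and threshold are invented for illustration.

```python
import numpy as np

def haar_denoise(y, t):
    """Full Haar decomposition with soft-thresholded detail coefficients."""
    if len(y) == 1:
        return y.astype(float)
    a = (y[0::2] + y[1::2]) / np.sqrt(2)
    d = (y[0::2] - y[1::2]) / np.sqrt(2)
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft thresholding
    a = haar_denoise(a, t)
    out = np.empty(len(y))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
n = 128
x = np.where(np.arange(n) < n // 3, 1.0, -1.0)        # step not aligned to the dyadic grid
y = x + 0.3 * rng.standard_normal(n)
t = 0.3 * np.sqrt(2 * np.log(n))
plain = haar_denoise(y, t)
# cycle spinning: denoise every circular shift, shift back, average
spun = np.mean([np.roll(haar_denoise(np.roll(y, s), t), -s) for s in range(n)], axis=0)
err_plain = np.mean((plain - x) ** 2)
err_spun = np.mean((spun - x) ** 2)
```

    Averaging over shifts typically suppresses the blocky artifacts a single shift-variant wavelet estimate leaves near discontinuities.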

    Antenna Measurement

    ISBN 978-953-7619-67-

    Localized spherical deconvolution

    We provide a new algorithm for the treatment of the deconvolution problem on the sphere which combines the traditional SVD inversion with an appropriate thresholding technique in a well-chosen new basis. We establish upper bounds for the behavior of our procedure for any $\mathbb{L}_p$ loss. It is important to emphasize the adaptation properties of our procedures with respect to the regularity (sparsity) of the object to recover as well as to inhomogeneous smoothness. We also perform a numerical study which shows that the procedure has very promising properties in practice as well. Comment: Published at http://dx.doi.org/10.1214/10-AOS858 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
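    The interplay between SVD inversion and regularization can be illustrated on a generic ill-posed linear system. This is a plain truncated-SVD sketch, not the spherical construction of the paper; the operator, "true" object, noise level, and cutoff are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
idx = np.arange(n)
# severely ill-posed Gaussian blur operator, standing in for the convolution
A = np.exp(-((idx[:, None] - idx[None, :]) / 3.0) ** 2)
x = np.zeros(n)
x[10:20] = 1.0                                        # "true" object to recover
y = A @ x + 0.01 * rng.standard_normal(n)             # noisy indirect observations

U, s, Vt = np.linalg.svd(A)
coef = U.T @ y                                        # observations in the SVD basis
keep = s > 0.05 * s[0]                                # drop tiny singular values
xhat = Vt.T @ np.where(keep, coef / np.where(keep, s, 1.0), 0.0)
naive = Vt.T @ (coef / np.maximum(s, 1e-300))         # unregularized inversion blows up
err_svd = np.mean((xhat - x) ** 2)
err_naive = np.mean((naive - x) ** 2)
```

    Dividing by the near-zero singular values amplifies the noise catastrophically; truncation tames this at the price of a smoothing bias, and the paper's contribution is to replace hard truncation with adaptive thresholding in a localized basis.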

    Testing the isotropy of high energy cosmic rays using spherical needlets

    For many decades, ultrahigh energy charged particles of unknown origin that can be observed from the ground have been a puzzle for particle physicists and astrophysicists. As an attempt to discriminate among several possible production scenarios, astrophysicists try to test the statistical isotropy of the directions of arrival of these cosmic rays. At the highest energies, they are supposed to point toward their sources with good accuracy. However, the observations are so rare that testing the distribution of such samples of directional data on the sphere is nontrivial. In this paper, we choose a nonparametric framework that makes weak hypotheses on the alternative distributions and in turn allows one to detect various and possibly unexpected forms of anisotropy. We explore two particular procedures. Both are derived from fitting the empirical distribution with wavelet expansions of densities. We use the wavelet frame introduced by [SIAM J. Math. Anal. 38 (2006b) 574-594 (electronic)], the so-called needlets. The expansions are truncated at scale indices no larger than some $J^{\star}$, and the $L^p$ distances between those estimates and the null density are computed. One family of tests (called Multiple) is based on the idea of testing the distance from the null for each choice of $J=1,\ldots,J^{\star}$, whereas the so-called PlugIn approach is based on the single full $J^{\star}$ expansion, but with thresholded wavelet coefficients. We describe the practical implementation of these two procedures and compare them to other methods in the literature. As alternatives to isotropy, we consider both very simple toy models and more realistic nonisotropic models based on physics-inspired simulations. The Monte Carlo study shows good performance of the Multiple test, even at moderate sample size, for a wide sample of alternative hypotheses and for different choices of the parameter $J^{\star}$.
    On the 69 most energetic events published by the Pierre Auger Collaboration, the needlet-based procedures suggest statistical evidence for anisotropy. Using several values for the parameters of the methods, our procedures yield $p$-values below 1%, but with uncontrolled multiplicity issues. The flexibility of this method and the possibility to modify it to take into account a large variety of extensions of the problem make it an interesting option for future investigation of the origin of ultrahigh energy cosmic rays. Comment: Published at http://dx.doi.org/10.1214/12-AOAS619 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
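    For intuition about Monte Carlo calibration of an isotropy test, here is a much simpler procedure than the needlet tests of the paper: a Rayleigh-type statistic (length of the resultant mean direction) whose null distribution is simulated under uniformity on the sphere. The concentrated alternative and all parameters are invented for illustration.

```python
import numpy as np

def sample_uniform_sphere(m, rng):
    """Uniform directions on S^2 via normalized Gaussian vectors."""
    v = rng.standard_normal((m, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def resultant_stat(dirs):
    """Length of the mean direction vector; large under clustering."""
    return np.linalg.norm(dirs.mean(axis=0))

rng = np.random.default_rng(4)
m = 69                                                # sample size as in the Auger data
# anisotropic alternative: directions pulled toward the north pole
alt = sample_uniform_sphere(m, rng) + np.array([0.0, 0.0, 1.5])
alt /= np.linalg.norm(alt, axis=1, keepdims=True)

obs = resultant_stat(alt)
null = np.array([resultant_stat(sample_uniform_sphere(m, rng)) for _ in range(2000)])
pval = (1 + np.sum(null >= obs)) / (1 + len(null))    # Monte Carlo p-value
```

    Unlike this single global statistic, the multiscale needlet tests of the paper remain sensitive to localized or small-scale departures from isotropy that leave the mean direction essentially unchanged.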